Make timer_tick_occurred a second long timer #3065
Conversation
…ogress
1. This function implicitly assumed expiry after 1 `timer_tick_occurred`.
2. Introduce an explicit timer tick so that the TIMER_LIMIT can be set to an arbitrary number.
3. This addition is utilized in the following commit.
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main    #3065      +/-   ##
==========================================
+ Coverage   89.82%   90.03%   +0.20%
==========================================
  Files         116      116
  Lines       96466    98074    +1608
  Branches    96466    98074    +1608
==========================================
+ Hits        86655    88305    +1650
+ Misses       7264     7244      -20
+ Partials     2547     2525      -22
==========================================
```
This doesn't really say anything. Please clearly state the practical reason for the change.
lightning/src/ln/channelmanager.rs (outdated)

```diff
@@ -2260,15 +2260,15 @@ const CHECK_CLTV_EXPIRY_SANITY: u32 = MIN_CLTV_EXPIRY_DELTA as u32 - LATENCY_GRA
 const CHECK_CLTV_EXPIRY_SANITY_2: u32 = MIN_CLTV_EXPIRY_DELTA as u32 - LATENCY_GRACE_PERIOD_BLOCKS - 2*CLTV_CLAIM_BUFFER;

 /// The number of ticks of [`ChannelManager::timer_tick_occurred`] until expiry of incomplete MPPs
-pub(crate) const MPP_TIMEOUT_TICKS: u8 = 3;
+pub(crate) const MPP_TIMEOUT_TICKS: u8 = 3 * 60;
```
Let's define a constant for the ticks per minute or whatever so that we can change it in the future.
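The suggestion above could look something like the following. This is a hypothetical sketch, not the actual patch: `TICKS_PER_MINUTE` is an illustrative name, and the constant types are assumptions.

```rust
// Hypothetical sketch of the reviewer's suggestion: derive tick-based
// timeouts from a single ticks-per-minute constant, so that changing the
// timer_tick_occurred cadence only requires touching one definition.
// Names here are illustrative, not from the PR.
const TICKS_PER_MINUTE: u16 = 60; // one timer_tick_occurred call per second

// Incomplete MPPs expire after three minutes' worth of ticks.
const MPP_TIMEOUT_TICKS: u16 = 3 * TICKS_PER_MINUTE;

fn main() {
    // With a per-second tick, three minutes is 180 ticks.
    assert_eq!(MPP_TIMEOUT_TICKS, 180);
    println!("MPP timeout: {} ticks", MPP_TIMEOUT_TICKS);
}
```

With this shape, reverting to a per-minute tick (or moving to sub-second ticks later) only changes `TICKS_PER_MINUTE` rather than every `*_TICKS` constant.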
What functionality do we plan to add that will need sub-minute ticks? Also, how do we ensure users migrate to the new duration on their end? This is basically a "silent" breaking change to the API.
The motivation here is to retry invoice_requests. For non-mobile, in a world where we can route onion messages, it's probably not useful, because we can route onion messages along many paths and retrying adds little. On mobile it's kinda useful, especially before we can route, where we may have lost connectivity and we just need to retry a second later. Sadly, there's not really a great way to do this sans-std.
The alternative options here are basically (a) do it during message handling, using std to detect time passing, (b) do it during message handling without time passing, (c) this (maybe with renaming the method to break the API). I'm a bit split between (b) and (c), honestly.
I see, (b) is probably still not reliable enough for our needs though if we're not receiving messages (pings don't make it through the …
1. Previously, `timer_tick_occurred` was designed to be called every minute, so the TIMER_LIMITS were set in minutes.
2. However, this restricted the ability to set timer-tick lengths in smaller durations.
3. This commit updates `timer_tick_occurred` to be called every second instead of every minute.
4. All TIMER_LIMITS are adjusted accordingly to reflect this change.
5. Additionally, a test is updated to ensure successful compilation post-update.
Updated from pr3065.01 to pr3065.02 (diff): Changes:
@TheBlueMatt
Hmm... that may work as …
It gets called in …
Yeah... though I'm not sure how we'd do that rate limiting. Would an exponential backoff be fine? Also note that … We also may want to consider using different reply paths on retries (and maybe the initial sends, for that matter) in case the reply path used is the issue.
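The exponential backoff floated above could be sketched over one-second ticks like this. Everything here (`RetryState`, `on_tick`, the cap of three retries) is a hypothetical illustration of the idea under discussion, not code from the PR.

```rust
// Hypothetical sketch: exponential backoff driven by per-second ticks.
// Retry after 1 tick, then 2 more, then 4 more, up to max_retries.
struct RetryState {
    ticks_since_send: u32,
    next_retry_in: u32,
    retries: u32,
}

impl RetryState {
    fn new() -> Self {
        RetryState { ticks_since_send: 0, next_retry_in: 1, retries: 0 }
    }

    /// Called once per timer tick; returns true when a retry should fire.
    fn on_tick(&mut self, max_retries: u32) -> bool {
        self.ticks_since_send += 1;
        if self.retries < max_retries && self.ticks_since_send >= self.next_retry_in {
            self.ticks_since_send = 0;
            self.next_retry_in *= 2; // exponential backoff
            self.retries += 1;
            return true;
        }
        false
    }
}

fn main() {
    let mut state = RetryState::new();
    let mut retry_ticks = Vec::new();
    for tick in 1..=20u32 {
        if state.on_tick(3) {
            retry_ticks.push(tick);
        }
    }
    // Fires 1 tick after send, then 2 more, then 4 more: ticks 1, 3, 7.
    assert_eq!(retry_ticks, vec![1, 3, 7]);
    println!("{:?}", retry_ticks);
}
```

A per-second tick makes this kind of schedule expressible; with the old per-minute tick, the shortest possible retry delay was a full minute.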
Another alternative could be to extend the onion message handler to signal when we read a message from the peer that retries should be queued. |
ISTM if we want to avoid this we'll want a new method on … The other option, which @wpaulino suggested offline, is to hook deeply into the …
Hi @TheBlueMatt,
Thanks a lot for your guidance!
I don't see why we wouldn't use the normal flow - it tends to get drained very quickly.
We should probably only retry once.
Indeed, it won't help in those cases; however, I'm not entirely clear on what we can do for those cases. If the recipient is often offline, they probably shouldn't be using BOLT12 to receive (pre-async-payments, or unless they have a fairly robust notification/store+forward mechanism), and if they are at least somewhat robust, trying again in one second seems unlikely to help any more than trying again via a separate path or some small time delta later.
This PR updates timer_tick_occurred to operate on a second-long timer, providing finer timing control.
Reasoning: